Something that I forgot to mention, which tends to strike a particularly wrong chord: the assignment of zero moral value to the AI's experiences.
That's not so much an assumption as an initial action plan. Many of the denizens here don't want to build artificial people initially. They do want an artificial moral agent—but not one whose experiences are regarded as being intrinsically valuable—at least not straight away.
Of course you could build agents with valued experiences—the issue is more whether it is a good idea to do so initially. If you start with a non-person, you could still wind up building synthetic people eventually—if it were agreed that doing so was a good idea.
If you look at something like the I, Robot movie, those robots weren't valued much there either. Machines will probably start out being enslaved by humans, not valued as peers.